Information through a Spiking Neuron

Authors

  • Charles F. Stevens
  • Anthony M. Zador
Abstract

While it is generally agreed that neurons transmit information about their synaptic inputs through spike trains, the code by which this information is transmitted is not well understood. An upper bound on the information encoded is obtained by hypothesizing that the precise timing of each spike conveys information. Here we develop a general approach to quantifying the information carried by spike trains under this hypothesis, and apply it to the leaky integrate-and-fire (IF) model of neuronal dynamics. We formulate the problem in terms of the probability distribution p(T) of interspike intervals (ISIs), assuming that spikes are detected with arbitrary but finite temporal resolution. In the absence of added noise, all the variability in the ISIs could encode information, and the information rate is simply the entropy of the ISI distribution, H(T) = <-p(T) log2 p(T)>, times the spike rate. H(T) thus provides an exact expression for the information rate. The methods developed here can be used to determine experimentally the information carried by spike trains, even when the lower bound of the information rate provided by the stimulus reconstruction method is not tight. In a preliminary series of experiments, we have used these methods to estimate information rates of hippocampal neurons in slice in response to somatic current injection. These pilot experiments suggest information rates as high as 6.3 bits/spike.

1 Information rate of spike trains

Cortical neurons use spike trains to communicate with other neurons. The output of each neuron is a stochastic function of its input from the other neurons. It is of interest to know how much each neuron is telling other neurons about its inputs. How much information does the spike train provide about a signal? Consider noise n(t) added to a signal s(t) to produce some total input y(t) = s(t) + n(t). This is then passed through a (possibly stochastic) functional F to produce the output spike train F[y(t)] -> z(t).
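The setup above — a signal plus noise driving a leaky IF neuron that emits a spike train — can be sketched in a few lines. This is a minimal illustrative simulation, not the authors' code: the membrane time constant, threshold, and input scaling are arbitrary choices made only so the example runs.

```python
import numpy as np

def leaky_if_spike_times(s, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0,
                         noise_sd=0.0, seed=0):
    """Pass the input s(t), plus optional white noise n(t), through a leaky
    integrate-and-fire functional F[.]; return the list of spike times.

    tau (membrane time constant, s), v_th, v_reset, and the input units
    are illustrative assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    v = v_reset
    spikes = []
    for i, s_t in enumerate(s):
        y_t = s_t + noise_sd * rng.standard_normal()  # y(t) = s(t) + n(t)
        v += dt * (-v / tau + y_t)                    # leaky integration
        if v >= v_th:                                 # threshold crossing
            spikes.append(i * dt)                     # record spike time
            v = v_reset                               # reset after spike
    return spikes

# A constant suprathreshold drive yields a regular spike train whose list
# of spike times is, by the assumption above, the entire output code z(t).
times = leaky_if_spike_times(np.full(10000, 80.0), dt=1e-4)
```

With `noise_sd=0` the map from s(t) to spike times is deterministic, which is exactly the regime in which the spike-train entropy equals the mutual information (eq. 3 below).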
We assume that all the information contained in the spike train can be represented by the list of spike times; that is, there is no extra information contained in properties such as spike height or width. Note, however, that many characteristics of the spike train, such as the mean or instantaneous rate, can be derived from this representation; if such a derivative property turns out to be the relevant one, then this formulation can be specialized appropriately.

We will be interested, then, in the mutual information I(S(t); Z(t)) between the input signal ensemble S(t) and the output spike train ensemble Z(t). This is defined in terms of the entropy H(S) of the signal, the entropy H(Z) of the spike train, and their joint entropy H(S, Z),

    I(S; Z) = H(S) + H(Z) - H(S, Z).    (1)

Note that the mutual information is symmetric, I(S; Z) = I(Z; S), since the joint entropy H(S, Z) = H(Z, S). Note also that if the signal S(t) and the spike train Z(t) are completely independent, then the mutual information is 0, since the joint entropy is just the sum of the individual entropies, H(S, Z) = H(S) + H(Z). This is completely in line with our intuition, since in this case the spike train can provide no information about the signal.

1.1 Information estimation through stimulus reconstruction

Bialek and colleagues (Bialek et al., 1991) have used the reconstruction method to obtain a strict lower bound on the mutual information in an experimental setting. This method is based on an expression mathematically equivalent to eq. (1) involving the conditional entropy H(S|Z) of the signal given the spike train,

    I(S; Z) = H(S) - H(S|Z) >= H(S) - H_est(S|Z),    (2)

where H_est(S|Z) is an upper bound on the conditional entropy obtained from a reconstruction s_est(t) of the signal. The entropy is estimated from the second-order statistics of the reconstruction error e(t) = s(t) - s_est(t); from the maximum entropy property of the Gaussian this is an upper bound.
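The definition in eq. (1) and its two properties — symmetry and vanishing for independent variables — can be checked numerically on a small discrete example. The function names below are illustrative; the input is a joint count table over discretized signal values (rows) and spike-train words (columns).

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a (possibly unnormalized) histogram."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information_bits(joint):
    """I(S; Z) = H(S) + H(Z) - H(S, Z), eq. (1), from a joint count table."""
    joint = np.asarray(joint, dtype=float)
    h_s = entropy_bits(joint.sum(axis=1))  # H(S): signal marginal
    h_z = entropy_bits(joint.sum(axis=0))  # H(Z): spike-train marginal
    return h_s + h_z - entropy_bits(joint)  # subtract joint entropy H(S, Z)

# Independent S and Z: the joint is the product of its marginals, so I = 0.
assert abs(mutual_information_bits([[1, 1], [1, 1]])) < 1e-12
# Perfectly dependent: each signal value maps to one spike word, I = 1 bit.
assert abs(mutual_information_bits([[1, 0], [0, 1]]) - 1.0) < 1e-12
# Symmetry I(S; Z) = I(Z; S): transposing the joint table changes nothing.
j = np.array([[3, 1], [0, 2]])
assert abs(mutual_information_bits(j) - mutual_information_bits(j.T)) < 1e-12
```

The same routine also verifies the equivalence underlying eq. (2), since H(S|Z) = H(S, Z) - H(Z) for discrete distributions.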
Intuitively, the first equality says that the information gained about the signal by observing the spike train is just the initial uncertainty of the signal (in the absence of knowledge of the spike train) minus the uncertainty that remains about the signal once the spike train is known, and the inequality says that this second uncertainty must be greater for any particular estimate than for the optimal estimate.

1.2 Information estimation through spike train reliability

We have adopted a different approach based on an equivalent expression for the mutual information:

    I(S; Z) = H(Z) - H(Z|S).    (3)

The first term H(Z) is the entropy of the spike train, while the second, H(Z|S), is the conditional entropy of the spike train given the signal; intuitively this is like the inverse repeatability of the spike train given repeated applications of the same signal. Eq. (3) has the advantage that, if the spike train is a deterministic function of the input, it permits exact calculation of the mutual information. This follows from an important difference between the conditional entropy term here and in eq. (2): whereas H(S|Z) has both a deterministic and a stochastic component, H(Z|S) has only a stochastic component. Thus in the absence of added noise, the discrete entropy H(Z|S) = 0, and eq. (3) reduces to I(S; Z) = H(Z). If ISIs are independent, then H(Z) can be simply expressed in terms of the entropy of the (discrete) ISI distribution p(T),
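Under these assumptions (no added noise, independent ISIs), the information per spike is just the entropy of the ISI histogram at the detection resolution, and the information rate is that entropy times the spike rate. The sketch below is an illustrative estimator, not the authors' analysis code; `dt` is the assumed temporal resolution of spike detection.

```python
import numpy as np

def isi_entropy_bits(spike_times, dt=1e-3):
    """H(T) in bits/spike: entropy of the ISI distribution p(T), with ISIs
    discretized at temporal resolution dt. Valid as an information measure
    only under the text's assumptions: H(Z|S) = 0 and independent ISIs."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    counts = np.bincount(np.round(isis / dt).astype(int))  # histogram of p(T)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A perfectly regular train has a single-bin p(T): H(T) = 0 bits/spike.
regular = np.arange(0.0, 1.0, 0.02)
# Jittered (here exponential) ISIs spread p(T) over many bins, raising H(T).
rng = np.random.default_rng(0)
jittered = np.cumsum(rng.exponential(0.02, size=500))

h_bits = isi_entropy_bits(jittered)          # bits per spike
rate = len(jittered) / jittered[-1]          # spikes per second
info_rate = h_bits * rate                    # bits per second, per the text
```

Coarsening `dt` merges histogram bins and lowers H(T), which is why the finite temporal resolution of spike detection caps the estimated information rate.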




Journal:

Volume   Issue

Pages  -

Publication date: 1995